
    Input-driven components of spike-frequency adaptation can be unmasked in vivo

    Spike-frequency adaptation affects the response characteristics of many sensory neurons, and different biophysical processes contribute to this phenomenon. Many cellular mechanisms underlying adaptation are triggered by the spike output of the neuron in a feedback manner (e.g., specific potassium currents that are primarily activated by the spiking activity). In contrast, other components of adaptation may be caused, in a feedforward way, by the sensory or synaptic input that the neuron receives. Examples include viscoelasticity of mechanoreceptors, transducer adaptation in hair cells, and short-term synaptic depression. For a functional characterization of spike-frequency adaptation, it is essential to understand the dependence of adaptation on the input and output of the neuron. Here, we demonstrate how an input-driven component of adaptation can be uncovered in vivo from recordings of spike trains in an insect auditory receptor neuron, even if the total adaptation is dominated by output-driven components. Our method is based on the identification of different inputs that yield the same output and on sudden switches between these inputs. In particular, we determined for different sound frequencies those intensities that are required to yield a predefined steady-state firing rate of the neuron. We then found that switching between these sound frequencies causes transient deviations of the firing rate. These firing-rate deflections are evidence of input-driven adaptation and can be used to quantify how this adaptation component affects the neural activity. Based on previous knowledge of the processes in auditory transduction, we conclude that for the investigated auditory receptor neurons, this adaptation phenomenon is of mechanical origin.

    The iso-response method

    Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments.
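
    As an illustration of the closed-loop search strategy described in this abstract, here is a minimal sketch of finding one iso-response point by bisection. The `measure_firing_rate` callback is a hypothetical stand-in for the actual recording loop, and a monotonic rate-versus-intensity curve is assumed:

```python
# Minimal sketch of a closed-loop iso-response search. Assumptions:
# measure_firing_rate(intensity) is a hypothetical callback standing in
# for the real experiment, and the rate grows monotonically with intensity.

def find_iso_intensity(measure_firing_rate, target_rate,
                       lo=0.0, hi=100.0, tol=0.5, max_iter=20):
    """Bisection search for the stimulus intensity that yields a
    predefined target firing rate (one iso-response point)."""
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        rate = measure_firing_rate(mid)
        if abs(rate - target_rate) < tol:
            return mid
        if rate < target_rate:
            lo = mid   # response too weak: raise intensity
        else:
            hi = mid   # response too strong: lower intensity
    return 0.5 * (lo + hi)
```

    Repeating this search for each stimulus parameter of interest (e.g., each sound frequency) traces out an iso-response curve in stimulus space.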

    Energy integration describes sound-intensity coding in an insect auditory system

    We investigate the transduction of sound stimuli into neural responses and focus on locust auditory receptor cells. As in other mechanosensory model systems, these neurons integrate acoustic inputs over a fairly broad frequency range. To test three alternative hypotheses about the nature of this spectral integration (amplitude, energy, pressure), we perform intracellular recordings while stimulating with superpositions of pure tones. On the basis of online data analysis and automatic feedback to the stimulus generator, we systematically explore regions in stimulus space that lead to the same level of neural activity. Focusing on such iso-firing-rate regions allows for a rigorous quantitative comparison of the electrophysiological data with predictions from the three hypotheses that is independent of nonlinearities induced by the spike dynamics. We find that the dependence of the firing rates of the receptors on the composition of the frequency spectrum can be well described by an energy-integrator model. This result holds at stimulus onset as well as for the steady-state response, including the case in which adaptation effects depend on the stimulus spectrum. Predictions of the model for the responses to bandpass-filtered noise stimuli are verified accurately. Together, our data suggest that the sound-intensity coding of the receptors can be understood as a three-step process, composed of a linear filter, a summation of the energy contributions in the frequency domain, and a firing-rate encoding of the resulting effective sound intensity. These findings set quantitative constraints for future biophysical models.
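
    The three-step process named at the end of this abstract (linear filter, energy summation, firing-rate encoding) can be sketched as follows. The filter gains and the output nonlinearity below are illustrative placeholders, not fitted values from the study:

```python
import numpy as np

def effective_intensity(amplitudes, gains):
    """Steps 1+2: linear filtering of each pure-tone component, then
    summation of the energy contributions in the frequency domain."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    gains = np.asarray(gains, dtype=float)
    return float(np.sum((gains * amplitudes) ** 2))

def firing_rate(amplitudes, gains, threshold=1.0, slope=50.0):
    """Step 3: monotonic firing-rate encoding of the effective sound
    intensity (a simple logarithmic placeholder nonlinearity)."""
    intensity = effective_intensity(amplitudes, gains)
    return max(slope * np.log10(max(intensity / threshold, 1e-12)), 0.0)
```

    The model's key prediction is that all tone combinations with equal effective intensity lie on one iso-firing-rate surface, an ellipsoid in amplitude space.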

    Local and Global Contrast Adaptation in Retinal Ganglion Cells

    Retinal ganglion cells react to changes in visual contrast by adjusting their sensitivity and temporal filtering characteristics. This contrast adaptation has primarily been studied under spatially homogeneous stimulation. Yet, ganglion cell receptive fields are often characterized by spatial subfields, providing a substrate for nonlinear spatial processing. This raises the question whether contrast adaptation follows a similar subfield structure or whether it occurs globally over the receptive field even for local stimulation. We therefore recorded ganglion cell activity in isolated salamander retinas while locally changing visual contrast. Ganglion cells showed primarily global adaptation characteristics, with notable exceptions in certain aspects of temporal filtering. Surprisingly, some changes in filtering were most pronounced for locations where contrast did not change. This seemingly paradoxical effect can be explained by a simple computational model, which emphasizes the importance of local nonlinearities in the retina and suggests a reevaluation of previously reported local contrast adaptation.

    Disentangling Sub-Millisecond Processes within an Auditory Transduction Chain

    Every sensation begins with the conversion of a sensory stimulus into the response of a receptor neuron. Typically, this involves a sequence of multiple biophysical processes that cannot all be monitored directly. In this work, we present an approach that is based on analyzing different stimuli that cause the same final output, here defined as the probability of the receptor neuron to fire a single action potential. Comparing such iso-response stimuli within the framework of nonlinear cascade models allows us to extract the characteristics of individual signal-processing steps with a temporal resolution much finer than the trial-to-trial variability of the measured output spike times. Applied to insect auditory receptor cells, the technique reveals the sub-millisecond dynamics of the eardrum vibration and of the electrical potential and yields a quantitative four-step cascade model. The model accounts for the tuning properties of this class of neurons and explains their high temporal resolution under natural stimulation. Owing to its simplicity and generality, the presented method is readily applicable to other nonlinear cascades and a large variety of signal-processing systems.
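
    To make the cascade idea concrete, here is a minimal sketch of a feed-forward four-step cascade of the kind the iso-response analysis dissects. All filter shapes, time constants, and nonlinearities below are illustrative placeholders, not the fitted model from the study:

```python
import numpy as np

def cascade_spike_probability(stimulus, dt=1e-5):
    """Map a sound-pressure waveform (sampled at dt seconds) to a
    per-sample single-spike probability via a four-step cascade."""
    t = np.arange(0.0, 2e-3, dt)
    # Step 1: mechanical band-pass filter (damped eardrum oscillation)
    f0, tau1 = 5000.0, 4e-4
    h1 = np.sin(2.0 * np.pi * f0 * t) * np.exp(-t / tau1)
    x = np.convolve(stimulus, h1, mode="full")[:len(stimulus)] * dt
    # Step 2: static nonlinearity of mechanosensory transduction
    y = x ** 2
    # Step 3: low-pass integration into a receptor potential
    tau2 = 3e-4
    h2 = np.exp(-t / tau2)
    v = np.convolve(y, h2, mode="full")[:len(stimulus)] * dt
    # Step 4: saturating output nonlinearity -> spike probability in [0, 1)
    return 1.0 - np.exp(-1e10 * v)
```

    Because several intermediate steps are unobservable, iso-response stimuli are what allow the individual stages of such a chain to be constrained separately.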

    Neural Circuit Inference from Function to Structure

    Advances in technology are opening new windows on the structural connectivity and functional dynamics of brain circuits. Quantitative frameworks are needed that integrate these data from anatomy and physiology. Here, we present a modeling approach that creates such a link. The goal is to infer the structure of a neural circuit from sparse neural recordings, using partial knowledge of its anatomy as a regularizing constraint. We recorded visual responses from the output neurons of the retina, the ganglion cells. We then generated a systematic sequence of circuit models that represents retinal neurons and connections and fitted them to the experimental data. The optimal models faithfully recapitulated the ganglion cell outputs. More importantly, they made predictions about dynamics and connectivity among unobserved neurons internal to the circuit, and these were subsequently confirmed by experiment. This circuit inference framework promises to facilitate the integration and understanding of big data in neuroscience.

    Equation of State for Helium-4 from Microphysics

    We compute the free energy of helium-4 near the lambda transition based on an exact renormalization-group equation. An approximate solution permits the determination of universal and nonuniversal thermodynamic properties starting from the microphysics of the two-particle interactions. The method does not suffer from infrared divergences. The critical chemical potential agrees with experiment. This supports a specific formulation of the functional integral that we have proposed recently. Our results for the equation of state reproduce the observed qualitative behavior. Despite certain quantitative shortcomings of our approximation, this demonstrates that ab initio calculations for collective phenomena become possible by modern renormalization-group methods.

    Bio-Inspired Approach to Modelling Retinal Ganglion Cells using System Identification Techniques

    The processing capabilities of biological vision systems are still vastly superior to artificial vision, even though this has been an active area of research for over half a century. Current artificial vision techniques integrate many insights from biology, yet they remain far off the capabilities of animals and humans in terms of speed, power, and performance. A key aspect to modeling the human visual system is the ability to accurately model the behavior and computation within the retina. In particular, we focus on modeling the retinal ganglion cells (RGCs) as they convey the accumulated data of real-world images as action potentials onto the visual cortex via the optic nerve. Computational models that approximate the processing that occurs within RGCs can be derived by quantitatively fitting the sets of physiological data using an input–output analysis where the input is a known stimulus and the output is neuronal recordings. Currently, these input–output responses are modeled using computational combinations of linear and nonlinear models that are generally complex and lack any relevance to the underlying biophysics. In this paper, we illustrate how system identification techniques, which take inspiration from biological systems, can accurately model retinal ganglion cell behavior, and are a viable alternative to traditional linear–nonlinear approaches.
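
    For context, here is a minimal sketch of the conventional linear–nonlinear baseline that this abstract compares against: a filter estimated by spike-triggered averaging, followed by a static rectifying nonlinearity. All names, the window length, and the choice of nonlinearity are illustrative:

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, window=30):
    """Estimate the linear filter as the average stimulus segment
    preceding each spike (spikes is a 0/1 array aligned to stimulus)."""
    idx = np.flatnonzero(spikes)
    idx = idx[idx >= window]                      # need a full segment
    segments = np.stack([stimulus[i - window:i] for i in idx])
    return segments.mean(axis=0)

def ln_predict(stimulus, sta, nonlinearity=lambda g: np.maximum(g, 0.0)):
    """Filter the stimulus with the STA, then apply a static
    nonlinearity (half-wave rectification as placeholder)."""
    g = np.convolve(stimulus, sta[::-1], mode="full")[:len(stimulus)]
    return nonlinearity(g)
```

    System-identification approaches differ by letting the data determine richer model structures instead of fixing this single-filter cascade in advance.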